    Multilevel quasiseparable matrices in PDE-constrained optimization

    Optimization problems with constraints in the form of a partial differential equation arise frequently in engineering design. The discretization of PDE-constrained optimization problems results in large-scale linear systems of saddle-point type. In this paper we propose and develop a novel approach to solving such systems by exploiting so-called quasiseparable matrices. One may think of a quasiseparable matrix as a discrete analogue of the Green's function of a one-dimensional differential operator. A nice feature of such matrices is that almost every algorithm that employs them has linear complexity. We extend the application of quasiseparable matrices to problems in higher dimensions. Namely, we construct a class of preconditioners which can be computed and applied at a linear computational cost. Their use with appropriate Krylov methods leads to algorithms of nearly linear complexity.
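
    As a rough, hedged illustration of the solver layout this abstract describes (a saddle-point KKT system, a block preconditioner, a Krylov method), the sketch below assumes SciPy and uses a generic 1D Laplacian as a stand-in for the discretized PDE operator. The quasiseparable machinery itself is not reproduced: the preconditioner blocks are applied with ordinary factorizations rather than linear-cost quasiseparable solves, and the problem sizes and regularization parameter beta are made up.

```python
# Sketch only: block-preconditioned MINRES on a toy saddle-point system
#   [A  B'] [x]   [f]
#   [B  0 ] [y] = [g]
# The paper's point is that quasiseparable structure lets the preconditioner
# blocks be applied in O(n); here plain dense/sparse factorizations stand in
# for that, so only the overall layout is illustrated.
import numpy as np
from scipy.sparse import diags, bmat, identity
from scipy.sparse.linalg import minres, LinearOperator, splu

n = 200
K = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")  # 1D Laplacian (stand-in)
M = identity(n, format="csc")                          # mass matrix, identity for simplicity
beta = 1e-2                                            # regularization parameter (assumed)
A = bmat([[M, None], [None, beta * M]], format="csc")  # (1,1) block: states and controls
B = bmat([[K, -M]], format="csc")                      # PDE constraint  K y - M u = f

KKT = bmat([[A, B.T], [B, None]], format="csc")
rhs = np.ones(KKT.shape[0])

A_lu = splu(A)                                         # factor the (1,1) block
S = B.toarray() @ np.linalg.inv(A.toarray()) @ B.toarray().T   # exact Schur complement (demo only)

def apply_prec(v):
    """Apply the block-diagonal preconditioner diag(A, S)^{-1}."""
    out = np.empty_like(v)
    out[:2 * n] = A_lu.solve(v[:2 * n])
    out[2 * n:] = np.linalg.solve(S, v[2 * n:])
    return out

P = LinearOperator(KKT.shape, matvec=apply_prec)
x, info = minres(KKT, rhs, M=P)
print("MINRES converged:", info == 0)
```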

    Convergence Analysis of an Inexact Feasible Interior Point Method for Convex Quadratic Programming

    In this paper we discuss two variants of an inexact feasible interior point algorithm for convex quadratic programming. We consider two different neighbourhoods: a (small) one induced by the use of the Euclidean norm, which yields a short-step algorithm, and a symmetric one induced by the use of the infinity norm, which yields a (practical) long-step algorithm. Both algorithms allow the Newton equation system to be solved inexactly. For both algorithms we provide conditions on the level of error acceptable in the Newton equation and establish worst-case complexity results.
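
    To illustrate the inexact Newton idea discussed here, the following hedged sketch (not the paper's algorithm) runs a feasible primal-dual path-following method on a made-up convex QP and solves the inner Schur-complement system by conjugate gradients only up to a residual tolerance proportional to the duality measure mu. The problem data, centering parameter sigma, inexactness parameter eta and fraction-to-boundary step rule are illustrative choices; the neighbourhood tests that distinguish the short-step and long-step variants are omitted.

```python
# Sketch only: feasible primal-dual IPM for  min 0.5 x'Qx + c'x  s.t. Ax = b, x >= 0,
# with the inner (Schur complement) system solved INEXACTLY by CG, its residual
# tolerance tied to the duality measure mu.  All data/parameters are made up, and
# unlike the paper's feasible variants the inexact inner solve here leaves a small
# primal residual (no correction is applied).
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def cg(matvec, rhs, tol, maxiter=500):
    """Plain conjugate gradients on an SPD system, stopped once ||residual|| <= tol."""
    x = np.zeros_like(rhs); r = rhs.copy(); p = r.copy(); rs = r @ r
    for _ in range(maxiter):
        if np.sqrt(rs) <= tol:
            break
        Ap = matvec(p)
        a = rs / (p @ Ap)
        x += a * p; r -= a * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

rng = np.random.default_rng(0)
n, m = 200, 40
Q = rng.standard_normal((n, n)); Q = Q @ Q.T / n + np.eye(n)   # convex Hessian
c = rng.standard_normal(n)
A = np.vstack([np.ones(n), rng.standard_normal((m - 1, n))])
x = np.ones(n); b = A @ x                                      # x = e is primal feasible
y = np.zeros(m); s = Q @ x + c - A.T @ y
y[0] -= max(0.0, 1.0 - s.min())                                # shift y so that s > 0
s = Q @ x + c - A.T @ y

sigma, eta = 0.3, 0.05            # centering parameter and inexactness level (assumed)
for it in range(60):
    mu = x @ s / n
    if mu < 1e-8:
        break
    G = Q + np.diag(s / x)                        # Q + X^{-1} S
    chol = cho_factor(G)
    r1 = sigma * mu / x - s
    # inexact Newton step: solve (A G^{-1} A') dy = -A G^{-1} r1 to accuracy eta*mu
    dy = cg(lambda v: A @ cho_solve(chol, A.T @ v),
            -A @ cho_solve(chol, r1), tol=eta * mu)
    dx = cho_solve(chol, r1 + A.T @ dy)
    ds = -s + sigma * mu / x - (s / x) * dx
    alpha = 1.0                                   # fraction-to-boundary step
    for v, dv in ((x, dx), (s, ds)):
        neg = dv < 0
        if neg.any():
            alpha = min(alpha, 0.995 * np.min(-v[neg] / dv[neg]))
    x, y, s = x + alpha * dx, y + alpha * dy, s + alpha * ds

print(f"iterations: {it}, final duality measure: {x @ s / n:.2e}")
```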

    Implementation of an Interior Point Method with Basis Preconditioning

    An Interior-Point-Inspired algorithm for Linear Programs arising in Discrete Optimal Transport

    Discrete Optimal Transport problems give rise to very large linear programs (LPs) with a particular structure of the constraint matrix. In this paper we present a hybrid algorithm that mixes an interior point method (IPM) and column generation, specialized for the LP originating from the Kantorovich Optimal Transport problem. Knowing that optimal solutions of such problems display a high degree of sparsity, we propose a column-generation-like technique to force all intermediate iterates to be as sparse as possible. The algorithm is implemented nearly matrix-free. Indeed, most of the computations avoid forming the huge matrices involved and solve the Newton system using only a much smaller Schur complement of the normal equations. We prove theoretical results about the sparsity pattern of the optimal solution, exploiting the graph structure of the underlying problem. We use these results to mix iterative and direct linear solvers efficiently, in a way that avoids producing preconditioners or factorizations with excessive fill-in while at the same time guaranteeing a low number of conjugate gradient iterations. We compare the proposed method with two state-of-the-art solvers and show that it can compete with the best network optimization tools in terms of computational time and memory usage. We perform experiments with problems reaching more than four billion variables and demonstrate the robustness of the proposed method.
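
    As a rough illustration of the matrix-free normal-equations solves mentioned above, the sketch below (which assumes SciPy and is not the paper's hybrid IPM/column-generation method) applies the operator A*Theta*A' of a Kantorovich OT constraint matrix through reshapes alone, without ever forming A, and solves one such system with conjugate gradients. Theta, the right-hand side, the marginal sizes and the small regularization delta are invented stand-ins for quantities an IPM iteration would supply; the sparsity-forcing column generation is not reproduced.

```python
# Sketch only: the OT constraint matrix A maps x = vec(X) (X is n1 x n2) to the
# row sums and column sums of X.  The normal-equations operator A Theta A' that
# an IPM iteration needs is applied matrix-free (A is never formed) and the
# system is solved with CG.  Theta, delta and the right-hand side are made up.
import numpy as np
from scipy.sparse.linalg import cg, LinearOperator

n1, n2 = 400, 300
rng = np.random.default_rng(1)
theta = rng.uniform(0.1, 1.0, (n1, n2))   # stand-in for the IPM scaling X S^{-1}
delta = 1e-6                              # tiny regularization: A Theta A' is singular,
                                          # since shifting u by +t and v by -t changes nothing

def normal_eq_mv(y):
    """Apply (A Theta A' + delta I) to y = (u, v) without forming A."""
    u, v = y[:n1], y[n1:]
    T = theta * (u[:, None] + v[None, :])          # Theta .* (A' y), as an n1 x n2 array
    return np.concatenate([T.sum(axis=1), T.sum(axis=0)]) + delta * y

op = LinearOperator((n1 + n2, n1 + n2), matvec=normal_eq_mv)
rhs = rng.standard_normal(n1 + n2)
null = np.concatenate([np.ones(n1), -np.ones(n2)]) / np.sqrt(n1 + n2)
rhs -= (rhs @ null) * null                         # project out the null-space component

dy, info = cg(op, rhs)
print("CG converged:", info == 0,
      "| residual norm:", np.linalg.norm(normal_eq_mv(dy) - rhs))
```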

    Training very large scale nonlinear SVMs using Alternating Direction Method of Multipliers coupled with the Hierarchically Semi-Separable kernel approximations

    Typically, nonlinear Support Vector Machines (SVMs) produce significantly higher classification quality than linear ones but, at the same time, their computational complexity is prohibitive for large-scale datasets: this drawback is essentially related to the necessity to store and manipulate large, dense and unstructured kernel matrices. Despite the fact that at the core of training an SVM there is a simple convex optimization problem, the presence of kernel matrices is responsible for a dramatic performance reduction, making SVMs unworkably slow for large problems. Aiming at an efficient solution of large-scale nonlinear SVM problems, we propose the use of the Alternating Direction Method of Multipliers coupled with Hierarchically Semi-Separable (HSS) kernel approximations. As shown in this work, a detailed analysis of the interaction among their algorithmic components unveils a particularly efficient framework; indeed, the presented experimental results demonstrate a significant speed-up compared to state-of-the-art nonlinear SVM libraries (without significantly affecting the classification accuracy).
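
    The sketch below is a hedged illustration of the ADMM splitting for the box-constrained SVM dual in which a Nyström low-rank factor is used as a simple stand-in for the HSS kernel approximation described above: either kind of structure makes the expensive ADMM subproblem (a solve with Q + rho*I) cheap, here via the Woodbury identity. The data, RBF kernel, rank r, penalty rho and iteration count are made-up illustrative choices, not those used in the paper.

```python
# Sketch only: ADMM for the (bias-free) SVM dual
#     min_a  0.5 a'Qa - e'a,   0 <= a <= C,   Q = diag(y) K diag(y),
# with the kernel replaced by a Nystrom low-rank factor K ~ U U' as a stand-in
# for the paper's HSS approximation.  The a-update, a solve with (Q + rho I),
# becomes cheap through the Woodbury identity.  All data/parameters are made up.
import numpy as np

rng = np.random.default_rng(0)
n, d, r = 2000, 10, 100
X = rng.standard_normal((n, d))
y = np.sign(X[:, 0] + 0.3 * rng.standard_normal(n))

def rbf(A, B, gamma=0.5):
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

idx = rng.choice(n, r, replace=False)                  # Nystrom landmark points
Knm = rbf(X, X[idx]); Kmm = rbf(X[idx], X[idx]) + 1e-8 * np.eye(r)
w, V = np.linalg.eigh(Kmm)
U = Knm @ (V / np.sqrt(w)) @ V.T                       # U U' ~ Knm Kmm^{-1} Knm'

C, rho, iters = 1.0, 1.0, 200
Vy = y[:, None] * U                                    # diag(y) U, so Q ~ Vy Vy'
W = np.linalg.inv(rho * np.eye(r) + Vy.T @ Vy)         # small r x r Woodbury core
solve = lambda b: (b - Vy @ (W @ (Vy.T @ b))) / rho    # (rho I + Vy Vy')^{-1} b

a = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
for _ in range(iters):
    a = solve(np.ones(n) + rho * (z - u))              # a-update: quadratic subproblem
    z = np.clip(a + u, 0.0, C)                         # z-update: project onto the box
    u += a - z                                         # scaled multiplier update

print("nonzero dual variables:", int((z > 1e-6).sum()), "of", n)
```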